WORKING ON ARGS1
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating lottery-wide-resnet20 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 63.8389, Top 1 Accuracy: 101/10000 (1.01%)
accuracy_n for n = 4:  25/2500 (1.00%)
Pruning with synflow for 100 epochs.
compression: 0.0
dataset: cifar100
prunable parameters (target remaining / total): 1092960.0 / 1092960
<All keys matched successfully>
Parameter Sparsity: 1096676/1096676 (1.0000)
FLOP Sparsity: 163328612/163328612 (1.0000)
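Across all runs in this log, the unlabeled parameter count printed under each compression setting (e.g. 109296.0 of 1092960) is consistent with treating the compression value as log10 of the pruning ratio. A minimal sketch of that inferred relation; `synflow_target` is a hypothetical helper name, not from the actual codebase:

```python
import math

def synflow_target(total_prunable: int, compression: float) -> int:
    """Prunable parameters to keep at a given compression level.

    Inferred from the logged numbers: keep ceil(total / 10**compression),
    i.e. `compression` is read as log10 of the compression ratio.
    """
    return math.ceil(total_prunable / 10 ** compression)

# Values matching the log: wide-resnet20 (1092960 prunable) and
# imagenet-resnet18 (11172032 prunable).
print(synflow_target(1092960, 1.0))    # 109296
print(synflow_target(1092960, 1.5))    # 34563
print(synflow_target(1092960, 2.0))    # 10930
print(synflow_target(11172032, 3.75))  # 1987
```

The reported Parameter Sparsity numerators run slightly above these targets because they also count parameters the pruner does not touch (e.g. 1096676 total vs. 1092960 prunable for wide-resnet20).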
Evaluation: Average loss: 1.6724, Top 1 Accuracy: 6231/10000 (62.31%)
accuracy_n for n = 2:  8155/10000 (81.55%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (6.644), (1.077), (5.567), (0.80465)
accuracy_n for n = 3:  8929/10000 (89.29%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (6.644), (0.591), (6.053), (0.47727)
accuracy_n for n = 4:  9260/10000 (92.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (6.644), (0.391), (6.253), (0.34577)
accuracy_n for n = 5:  9452/10000 (94.52%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (6.644), (0.288), (6.356), (0.27111)
accuracy_n for n = 7:  9657/10000 (96.57%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (6.644), (0.167), (6.477), (0.18816)
accuracy_n for n = 10:  9773/10000 (97.73%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (6.644), (0.105), (6.539), (0.13795)
accuracy_n for n = 15:  9854/10000 (98.54%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (6.644), (0.061), (6.583), (0.11016)
accuracy_n for n = 40:  9927/10000 (99.27%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (6.644), (0.025), (6.619), (0.06626)
accuracy_n for n = 60:  9960/10000 (99.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (6.644), (0.016), (6.628), (0.03843)
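The first and third values in each metric line above are consistent with the entropy column being log2 of the class count (6.644 ≈ log2 100 for CIFAR-100, 3.322 ≈ log2 10 for CIFAR-10) and the information gain being that entropy minus E[H(Y|Yprime)]. A quick check of this assumed reading, using the n = 2 row above:

```python
import math

# Entropy column: log2 of the number of classes (assumed interpretation).
entropy_cifar100 = math.log2(100)  # ≈ 6.644, as logged
entropy_cifar10 = math.log2(10)    # ≈ 3.322, as logged

# Information gain column: entropy minus E[H(Y|Yprime)];
# for the n = 2 row above, 6.644 - 1.077 ≈ 5.567.
gain_n2 = entropy_cifar100 - 1.077
print(round(entropy_cifar100, 3), round(entropy_cifar10, 3), round(gain_n2, 3))
# prints: 6.644 3.322 5.567
```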
WORKING ON ARGS2
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating lottery-wide-resnet20 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 55.6092, Top 1 Accuracy: 100/10000 (1.00%)
accuracy_n for n = 4:  25/2500 (1.00%)
Pruning with synflow for 100 epochs.
compression: 1.0
dataset: cifar100
prunable parameters (target remaining / total): 109296.0 / 1092960
<All keys matched successfully>
Parameter Sparsity: 113012/1096676 (0.1030)
FLOP Sparsity: 44265104/163328612 (0.2710)
Evaluation: Average loss: 1.7690, Top 1 Accuracy: 5288/10000 (52.88%)
accuracy_n for n = 2:  7261/10000 (72.61%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (6.644), (1.553), (5.091), (0.93073)
accuracy_n for n = 3:  8225/10000 (82.25%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (6.644), (0.964), (5.680), (0.61465)
accuracy_n for n = 4:  8679/10000 (86.79%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (6.644), (0.690), (5.954), (0.46328)
accuracy_n for n = 5:  8945/10000 (89.45%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (6.644), (0.534), (6.110), (0.37367)
accuracy_n for n = 7:  9289/10000 (92.89%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (6.644), (0.335), (6.309), (0.27674)
accuracy_n for n = 10:  9530/10000 (95.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (6.644), (0.208), (6.436), (0.20115)
accuracy_n for n = 15:  9656/10000 (96.56%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (6.644), (0.143), (6.501), (0.16607)
accuracy_n for n = 40:  9831/10000 (98.31%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (6.644), (0.059), (6.585), (0.11676)
accuracy_n for n = 60:  9853/10000 (98.53%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (6.644), (0.046), (6.598), (0.11287)
WORKING ON ARGS3
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating lottery-wide-resnet20 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 113.8963, Top 1 Accuracy: 88/10000 (0.88%)
accuracy_n for n = 4:  24/2500 (0.96%)
Pruning with synflow for 100 epochs.
compression: 2.0
dataset: cifar100
prunable parameters (target remaining / total): 10930.0 / 1092960
<All keys matched successfully>
Parameter Sparsity: 14645/1096676 (0.0134)
FLOP Sparsity: 5839220/163328612 (0.0358)
Evaluation: Average loss: 3.3492, Top 1 Accuracy: 1922/10000 (19.22%)
accuracy_n for n = 2:  2617/10000 (26.17%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (6.644), (3.828), (2.816), (2.97200)
accuracy_n for n = 3:  3043/10000 (30.43%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (6.644), (3.455), (3.189), (2.84123)
accuracy_n for n = 4:  3390/10000 (33.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (6.644), (3.159), (3.485), (2.82772)
accuracy_n for n = 5:  3533/10000 (35.33%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (6.644), (2.988), (3.656), (2.91625)
accuracy_n for n = 7:  3851/10000 (38.51%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (6.644), (2.703), (3.941), (3.08038)
accuracy_n for n = 10:  4045/10000 (40.45%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (6.644), (2.449), (4.195), (3.55541)
accuracy_n for n = 15:  4348/10000 (43.48%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (6.644), (2.188), (4.455), (4.34151)
accuracy_n for n = 40:  4569/10000 (45.69%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (6.644), (1.737), (4.907), (8.78500)
accuracy_n for n = 60:  4557/10000 (45.57%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (6.644), (1.617), (5.026), (12.34265)
WORKING ON ARGS4
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating lottery-wide-resnet20 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 48.1742, Top 1 Accuracy: 100/10000 (1.00%)
accuracy_n for n = 4:  25/2500 (1.00%)
Pruning with synflow for 100 epochs.
compression: 1.5
dataset: cifar100
prunable parameters (target remaining / total): 34563.0 / 1092960
<All keys matched successfully>
Parameter Sparsity: 38278/1096676 (0.0349)
FLOP Sparsity: 15886522/163328612 (0.0973)
Evaluation: Average loss: 2.3579, Top 1 Accuracy: 3897/10000 (38.97%)
accuracy_n for n = 2:  5629/10000 (56.29%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (6.644), (2.390), (4.254), (1.57266)
accuracy_n for n = 3:  6551/10000 (65.51%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (6.644), (1.810), (4.834), (1.20307)
accuracy_n for n = 4:  7214/10000 (72.14%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (6.644), (1.403), (5.241), (0.98689)
accuracy_n for n = 5:  7600/10000 (76.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (6.644), (1.148), (5.496), (0.86051)
accuracy_n for n = 7:  8108/10000 (81.08%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (6.644), (0.836), (5.808), (0.72743)
accuracy_n for n = 10:  8422/10000 (84.22%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (6.644), (0.630), (6.014), (0.67474)
accuracy_n for n = 15:  8718/10000 (87.18%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (6.644), (0.459), (6.185), (0.70600)
accuracy_n for n = 40:  8981/10000 (89.81%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (6.644), (0.286), (6.358), (1.03653)
accuracy_n for n = 60:  9072/10000 (90.72%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (6.644), (0.234), (6.410), (1.34087)
WORKING ON ARGS5
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating imagenet-resnet18 model.
WARNING: ImageNet models do not implement `dense_classifier`.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3139, Top 1 Accuracy: 1018/10000 (10.18%)
accuracy_n for n = 4:  254/2500 (10.16%)
Pruning with synflow for 100 epochs.
compression: 0.0
dataset: cifar10
prunable parameters (target remaining / total): 11172032.0 / 11172032
<All keys matched successfully>
Parameter Sparsity: 11181642/11181642 (1.0000)
FLOP Sparsity: 37117962/37117962 (1.0000)
Evaluation: Average loss: 0.5173, Top 1 Accuracy: 8347/10000 (83.47%)
accuracy_n for n = 2:  936/1000 (93.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322), (0.373), (2.949), (0.17812)
accuracy_n for n = 3:  981/1000 (98.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322), (0.118), (3.204), (0.05500)
accuracy_n for n = 4:  989/1000 (98.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322), (0.057), (3.265), (0.03926)
accuracy_n for n = 5:  996/1000 (99.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322), (0.030), (3.292), (0.01606)
accuracy_n for n = 7:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322), (0.000), (3.322), (0.00101)
accuracy_n for n = 10:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322), (0.000), (3.322), (0.00002)
accuracy_n for n = 15:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322), (0.000), (3.322), (0.00000)
accuracy_n for n = 40:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322), (0.000), (3.322), (0.00000)
accuracy_n for n = 60:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322), (0.000), (3.322), (0.00000)
WORKING ON ARGS6
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating imagenet-resnet18 model.
WARNING: ImageNet models do not implement `dense_classifier`.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3239, Top 1 Accuracy: 837/10000 (8.37%)
accuracy_n for n = 4:  178/2500 (7.12%)
Pruning with synflow for 100 epochs.
compression: 1.0
dataset: cifar10
prunable parameters (target remaining / total): 1117204.0 / 11172032
<All keys matched successfully>
Parameter Sparsity: 1126813/11181642 (0.1008)
FLOP Sparsity: 16659643/37117962 (0.4488)
Evaluation: Average loss: 0.5317, Top 1 Accuracy: 8245/10000 (82.45%)
accuracy_n for n = 2:  943/1000 (94.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322), (0.344), (2.978), (0.19496)
accuracy_n for n = 3:  983/1000 (98.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322), (0.116), (3.206), (0.05174)
accuracy_n for n = 4:  989/1000 (98.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322), (0.075), (3.247), (0.02623)
accuracy_n for n = 5:  996/1000 (99.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322), (0.030), (3.292), (0.01597)
accuracy_n for n = 7:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322), (0.000), (3.322), (0.00091)
accuracy_n for n = 10:  999/1000 (99.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322), (0.008), (3.314), (0.00289)
accuracy_n for n = 15:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322), (0.000), (3.322), (0.00000)
accuracy_n for n = 40:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322), (0.000), (3.322), (0.00000)
accuracy_n for n = 60:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322), (0.000), (3.322), (0.00000)
WORKING ON ARGS7
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating imagenet-resnet18 model.
WARNING: ImageNet models do not implement `dense_classifier`.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3362, Top 1 Accuracy: 961/10000 (9.61%)
accuracy_n for n = 4:  241/2500 (9.64%)
Pruning with synflow for 100 epochs.
compression: 1.5
dataset: cifar10
prunable parameters (target remaining / total): 353291.0 / 11172032
<All keys matched successfully>
Parameter Sparsity: 362901/11181642 (0.0325)
FLOP Sparsity: 8969739/37117962 (0.2417)
Evaluation: Average loss: 0.6009, Top 1 Accuracy: 8041/10000 (80.41%)
accuracy_n for n = 2:  938/1000 (93.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322), (0.408), (2.914), (0.20382)
accuracy_n for n = 3:  961/1000 (96.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322), (0.240), (3.082), (0.11214)
accuracy_n for n = 4:  987/1000 (98.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322), (0.093), (3.229), (0.03734)
accuracy_n for n = 5:  989/1000 (98.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322), (0.079), (3.243), (0.02657)
accuracy_n for n = 7:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322), (0.000), (3.322), (0.00126)
accuracy_n for n = 10:  999/1000 (99.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322), (0.008), (3.314), (0.00296)
accuracy_n for n = 15:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322), (0.000), (3.322), (0.00000)
accuracy_n for n = 40:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322), (0.000), (3.322), (0.00000)
accuracy_n for n = 60:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322), (0.000), (3.322), (0.00000)
WORKING ON ARGS8
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating imagenet-resnet18 model.
WARNING: ImageNet models do not implement `dense_classifier`.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3219, Top 1 Accuracy: 1026/10000 (10.26%)
accuracy_n for n = 4:  257/2500 (10.28%)
Pruning with synflow for 100 epochs.
compression: 2.0
dataset: cifar10
prunable parameters (target remaining / total): 111721.0 / 11172032
<All keys matched successfully>
Parameter Sparsity: 121331/11181642 (0.0109)
FLOP Sparsity: 3805160/37117962 (0.1025)
Evaluation: Average loss: 0.6587, Top 1 Accuracy: 7740/10000 (77.40%)
accuracy_n for n = 2:  905/1000 (90.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322), (0.565), (2.757), (0.25752)
accuracy_n for n = 3:  955/1000 (95.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322), (0.271), (3.051), (0.11301)
accuracy_n for n = 4:  978/1000 (97.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322), (0.139), (3.183), (0.06798)
accuracy_n for n = 5:  987/1000 (98.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322), (0.071), (3.251), (0.04847)
accuracy_n for n = 7:  990/1000 (99.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322), (0.048), (3.274), (0.03241)
accuracy_n for n = 10:  999/1000 (99.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322), (0.008), (3.314), (0.00422)
accuracy_n for n = 15:  997/1000 (99.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322), (0.020), (3.302), (0.00714)
accuracy_n for n = 40:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322), (0.000), (3.322), (0.00019)
accuracy_n for n = 60:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322), (0.000), (3.322), (0.00000)
WORKING ON ARGS9
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating imagenet-resnet18 model.
WARNING: ImageNet models do not implement `dense_classifier`.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3357, Top 1 Accuracy: 1018/10000 (10.18%)
accuracy_n for n = 4:  250/2500 (10.00%)
Pruning with synflow for 100 epochs.
compression: 3.0
dataset: cifar10
prunable parameters (target remaining / total): 11173.0 / 11172032
<All keys matched successfully>
Parameter Sparsity: 20782/11181642 (0.0019)
FLOP Sparsity: 598229/37117962 (0.0161)
Evaluation: Average loss: 0.9703, Top 1 Accuracy: 6592/10000 (65.92%)
accuracy_n for n = 2:  794/1000 (79.40%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322), (1.009), (2.313), (0.55841)
accuracy_n for n = 3:  865/1000 (86.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322), (0.603), (2.719), (0.36971)
accuracy_n for n = 4:  888/1000 (88.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322), (0.485), (2.837), (0.33414)
accuracy_n for n = 5:  911/1000 (91.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322), (0.437), (2.885), (0.23853)
accuracy_n for n = 7:  931/1000 (93.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322), (0.282), (3.040), (0.18358)
accuracy_n for n = 10:  958/1000 (95.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322), (0.185), (3.137), (0.15272)
accuracy_n for n = 15:  971/1000 (97.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322), (0.117), (3.205), (0.07874)
accuracy_n for n = 40:  992/1000 (99.20%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322), (0.041), (3.281), (0.03054)
accuracy_n for n = 60:  992/1000 (99.20%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322), (0.041), (3.281), (0.04018)
WORKING ON ARGS10
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating imagenet-resnet18 model.
WARNING: ImageNet models do not implement `dense_classifier`.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3612, Top 1 Accuracy: 769/10000 (7.69%)
accuracy_n for n = 4:  195/2500 (7.80%)
Pruning with synflow for 100 epochs.
compression: 3.75
dataset: cifar10
prunable parameters (target remaining / total): 1987.0 / 11172032
<All keys matched successfully>
Parameter Sparsity: 11596/11181642 (0.0010)
FLOP Sparsity: 207943/37117962 (0.0056)
Evaluation: Average loss: 1.3943, Top 1 Accuracy: 4918/10000 (49.18%)
accuracy_n for n = 2:  662/1000 (66.20%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322), (1.551), (1.771), (0.94422)
accuracy_n for n = 3:  699/1000 (69.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322), (1.356), (1.966), (0.75227)
accuracy_n for n = 4:  780/1000 (78.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322), (0.960), (2.362), (0.61374)
accuracy_n for n = 5:  791/1000 (79.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322), (0.900), (2.422), (0.53715)
accuracy_n for n = 7:  840/1000 (84.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322), (0.675), (2.647), (0.47784)
accuracy_n for n = 10:  844/1000 (84.40%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322), (0.606), (2.716), (0.46465)
accuracy_n for n = 15:  900/1000 (90.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322), (0.343), (2.979), (0.35608)
accuracy_n for n = 40:  920/1000 (92.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322), (0.201), (3.121), (0.56048)
accuracy_n for n = 60:  915/1000 (91.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322), (0.204), (3.118), (0.74225)
WORKING ON ARGS11
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating default-conv model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3020, Top 1 Accuracy: 879/10000 (8.79%)
accuracy_n for n = 4:  200/2500 (8.00%)
Pruning with synflow for 100 epochs.
compression: 0.0
dataset: cifar10
prunable parameters (target remaining / total): 337760.0 / 337760
<All keys matched successfully>
Parameter Sparsity: 337834/337834 (1.0000)
FLOP Sparsity: 10715146/10715146 (1.0000)
Evaluation: Average loss: 0.8292, Top 1 Accuracy: 7129/10000 (71.29%)
accuracy_n for n = 2:  857/1000 (85.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322), (0.820), (2.502), (0.45147)
accuracy_n for n = 3:  930/1000 (93.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322), (0.428), (2.894), (0.19183)
accuracy_n for n = 4:  951/1000 (95.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322), (0.267), (3.055), (0.13619)
accuracy_n for n = 5:  966/1000 (96.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322), (0.208), (3.114), (0.08517)
accuracy_n for n = 7:  991/1000 (99.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322), (0.061), (3.261), (0.02549)
accuracy_n for n = 10:  997/1000 (99.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322), (0.020), (3.302), (0.00972)
accuracy_n for n = 15:  999/1000 (99.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322), (0.008), (3.314), (0.00369)
accuracy_n for n = 40:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322), (0.000), (3.322), (0.00000)
accuracy_n for n = 60:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322), (0.000), (3.322), (0.00000)
WORKING ON ARGS12
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating default-conv model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3056, Top 1 Accuracy: 967/10000 (9.67%)
accuracy_n for n = 4:  226/2500 (9.04%)
Pruning with synflow for 100 epochs.
compression: 1.0
dataset: cifar10
prunable parameters (target remaining / total): 33776.0 / 337760
<All keys matched successfully>
Parameter Sparsity: 33850/337834 (0.1002)
FLOP Sparsity: 7756476/10715146 (0.7239)
Evaluation: Average loss: 0.9087, Top 1 Accuracy: 6829/10000 (68.29%)
accuracy_n for n = 2:  830/1000 (83.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322), (0.891), (2.431), (0.49628)
accuracy_n for n = 3:  907/1000 (90.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322), (0.509), (2.813), (0.25823)
accuracy_n for n = 4:  937/1000 (93.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322), (0.336), (2.986), (0.16153)
accuracy_n for n = 5:  953/1000 (95.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322), (0.249), (3.073), (0.11247)
accuracy_n for n = 7:  963/1000 (96.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322), (0.192), (3.130), (0.11324)
accuracy_n for n = 10:  980/1000 (98.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322), (0.115), (3.207), (0.06924)
accuracy_n for n = 15:  985/1000 (98.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322), (0.080), (3.242), (0.04284)
accuracy_n for n = 40:  998/1000 (99.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322), (0.014), (3.308), (0.00931)
accuracy_n for n = 60:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322), (0.000), (3.322), (0.00001)
WORKING ON ARGS13
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating default-conv model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3129, Top 1 Accuracy: 838/10000 (8.38%)
accuracy_n for n = 4:  170/2500 (6.80%)
Pruning with synflow for 100 epochs.
compression: 1.5
dataset: cifar10
prunable parameters (target remaining / total): 10681.0 / 337760
<All keys matched successfully>
Parameter Sparsity: 10754/337834 (0.0318)
FLOP Sparsity: 3184101/10715146 (0.2972)
Evaluation: Average loss: 1.0289, Top 1 Accuracy: 6379/10000 (63.79%)
accuracy_n for n = 2:  799/1000 (79.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322),(1.073),(2.249),(0.54816)
accuracy_n for n = 3:  895/1000 (89.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322),(0.617),(2.705),(0.29056)
accuracy_n for n = 4:  937/1000 (93.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322),(0.390),(2.932),(0.17493)
accuracy_n for n = 5:  953/1000 (95.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322),(0.312),(3.010),(0.12501)
accuracy_n for n = 7:  970/1000 (97.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322),(0.176),(3.146),(0.07250)
accuracy_n for n = 10:  985/1000 (98.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322),(0.090),(3.231),(0.04491)
accuracy_n for n = 15:  998/1000 (99.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322),(0.016),(3.306),(0.00574)
accuracy_n for n = 40:  999/1000 (99.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322),(0.008),(3.314),(0.00141)
accuracy_n for n = 60:  1000/1000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322),(0.000),(3.322),(0.00000)
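A pattern worth noting in these runs: the remaining-parameter count printed after the compression value (e.g. `10681.0` of `337760` at compression 1.5) matches `total * 10**-compression`, i.e. the compression setting appears to be the base-10 log of the keep ratio. A minimal sketch of that inferred convention (`remaining_params` is a hypothetical helper, not part of the logged code):

```python
# Sketch of the inferred budget rule: compression c keeps roughly
# total * 10**-c parameters. Inferred from the logged counts; the
# real pruning code may round differently.

def remaining_params(total: int, compression: float) -> int:
    """Approximate remaining-parameter budget for a compression level."""
    return round(total * 10 ** -compression)

total = 337760  # prunable parameters of the cifar10 conv model above
print(remaining_params(total, 1.5))  # 10681, matching the logged 10681.0
print(remaining_params(total, 2.0))  # 3378, matching the logged 3378.0
print(remaining_params(total, 3.0))  # 338, matching the logged 338.0
```

The compression-3.75 run logs 61.0 where this rounding gives 60, so the actual code likely rounds up or allocates the budget per layer; the sketch is only meant to explain the scale of the budgets.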
WORKING ON ARGS14
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating default-conv model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3054, Top 1 Accuracy: 1017/10000 (10.17%)
accuracy_n for n = 4:  247/2500 (9.88%)
Pruning with synflow for 100 epochs.
compression: 2.0
cifar10
3378.0 337760
<All keys matched successfully>
Parameter Sparsity: 3451/337834 (0.0102)
FLOP Sparsity: 1476572/10715146 (0.1378)
Evaluation: Average loss: 1.3909, Top 1 Accuracy: 4936/10000 (49.36%)
accuracy_n for n = 2:  617/1000 (61.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322),(1.733),(1.589),(0.98172)
accuracy_n for n = 3:  737/1000 (73.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322),(1.281),(2.041),(0.69296)
accuracy_n for n = 4:  775/1000 (77.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322),(1.094),(2.228),(0.57772)
accuracy_n for n = 5:  800/1000 (80.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322),(0.973),(2.349),(0.51012)
accuracy_n for n = 7:  876/1000 (87.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322),(0.609),(2.713),(0.34544)
accuracy_n for n = 10:  916/1000 (91.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322),(0.438),(2.884),(0.24118)
accuracy_n for n = 15:  947/1000 (94.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322),(0.268),(3.054),(0.13562)
accuracy_n for n = 40:  988/1000 (98.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322),(0.069),(3.252),(0.02836)
accuracy_n for n = 60:  992/1000 (99.20%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322),(0.041),(3.281),(0.04725)
WORKING ON ARGS15
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating default-conv model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3114, Top 1 Accuracy: 901/10000 (9.01%)
accuracy_n for n = 4:  217/2500 (8.68%)
Pruning with synflow for 100 epochs.
compression: 3.0
cifar10
338.0 337760
<All keys matched successfully>
Parameter Sparsity: 412/337834 (0.0012)
FLOP Sparsity: 255139/10715146 (0.0238)
Evaluation: Average loss: 1.8978, Top 1 Accuracy: 2815/10000 (28.15%)
accuracy_n for n = 2:  340/1000 (34.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322),(2.494),(0.828),(1.64571)
accuracy_n for n = 3:  378/1000 (37.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322),(2.251),(1.071),(1.51911)
accuracy_n for n = 4:  459/1000 (45.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322),(2.047),(1.275),(1.38170)
accuracy_n for n = 5:  462/1000 (46.20%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322),(1.989),(1.333),(1.29906)
accuracy_n for n = 7:  481/1000 (48.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322),(1.810),(1.512),(1.22429)
accuracy_n for n = 10:  509/1000 (50.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322),(1.671),(1.651),(1.15242)
accuracy_n for n = 15:  547/1000 (54.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322),(1.554),(1.768),(1.16584)
accuracy_n for n = 40:  587/1000 (58.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322),(1.277),(2.045),(1.62024)
accuracy_n for n = 60:  618/1000 (61.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322),(1.169),(2.153),(2.11870)
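In every metric line the first entry is log2 of the number of classes (3.322 ≈ log2 10 for CIFAR-10, 6.644 ≈ log2 100, 7.644 ≈ log2 200 for Tiny-ImageNet) and the third entry, information gain, is consistently the first minus the second, i.e. H(Y) − E[H(Y|Y′)] in bits under a uniform label prior. A small arithmetic check against the n = 2 line of the run just above (`information_gain` is an illustrative helper, not the logged code):

```python
import math

def information_gain(num_classes: int, cond_entropy: float) -> float:
    # H(Y) = log2(C) bits for a uniform prior over C classes,
    # minus the estimated conditional entropy E[H(Y|Y')].
    return math.log2(num_classes) - cond_entropy

print(round(math.log2(10), 3))                # 3.322, the logged entropy
print(round(information_gain(10, 2.494), 3))  # 0.828, the logged gain for n = 2
```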
WORKING ON ARGS16
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Creating default-conv model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 2.3118, Top 1 Accuracy: 1017/10000 (10.17%)
accuracy_n for n = 4:  259/2500 (10.36%)
Pruning with synflow for 100 epochs.
compression: 3.75
cifar10
61.0 337760
<All keys matched successfully>
Parameter Sparsity: 135/337834 (0.0004)
FLOP Sparsity: 103457/10715146 (0.0097)
Evaluation: Average loss: 2.1121, Top 1 Accuracy: 1946/10000 (19.46%)
accuracy_n for n = 2:  212/1000 (21.20%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (3.322),(2.731),(0.591),(1.95741)
accuracy_n for n = 3:  237/1000 (23.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (3.322),(2.599),(0.723),(1.87892)
accuracy_n for n = 4:  261/1000 (26.10%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (3.322),(2.469),(0.853),(1.83366)
accuracy_n for n = 5:  266/1000 (26.60%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (3.322),(2.393),(0.929),(1.77631)
accuracy_n for n = 7:  268/1000 (26.80%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (3.322),(2.297),(1.025),(1.72619)
accuracy_n for n = 10:  302/1000 (30.20%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (3.322),(2.034),(1.288),(1.62497)
accuracy_n for n = 15:  319/1000 (31.90%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (3.322),(1.842),(1.480),(1.63672)
accuracy_n for n = 40:  384/1000 (38.40%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (3.322),(1.537),(1.785),(1.89392)
accuracy_n for n = 60:  403/1000 (40.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (3.322),(1.355),(1.967),(2.26931)
WORKING ON ARGS17
Creating tinyimagenet-resnet18 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 5.4470, Top 1 Accuracy: 53/10000 (0.53%)
accuracy_n for n = 4:  12/2400 (0.50%)
Pruning with synflow for 100 epochs.
compression: 0.0
tiny-imagenet
11261632.0 11261632
<All keys matched successfully>
Parameter Sparsity: 11272456/11272456 (1.0000)
FLOP Sparsity: 2227441864/2227441864 (1.0000)
Evaluation: Average loss: 3.9381, Top 1 Accuracy: 5064/10000 (50.64%)
accuracy_n for n = 2:  14438/20000 (72.19%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (7.644),(1.750),(5.894),(2.34255)
accuracy_n for n = 3:  16570/20000 (82.85%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (7.644),(1.061),(6.583),(1.50268)
accuracy_n for n = 4:  17742/20000 (88.71%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (7.644),(0.684),(6.960),(1.03485)
accuracy_n for n = 5:  18375/20000 (91.88%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (7.644),(0.466),(7.178),(0.78466)
accuracy_n for n = 7:  19088/20000 (95.44%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (7.644),(0.246),(7.398),(0.51056)
accuracy_n for n = 10:  19447/20000 (97.23%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (7.644),(0.127),(7.517),(0.36522)
accuracy_n for n = 15:  19659/20000 (98.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (7.644),(0.065),(7.579),(0.33149)
accuracy_n for n = 40:  19779/20000 (98.89%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (7.644),(0.024),(7.620),(0.46315)
accuracy_n for n = 60:  19800/20000 (99.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (7.644),(0.020),(7.624),(0.56348)
WORKING ON ARGS18
Creating tinyimagenet-resnet18 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 5.6609, Top 1 Accuracy: 50/10000 (0.50%)
accuracy_n for n = 4:  12/2400 (0.50%)
Pruning with synflow for 100 epochs.
compression: 1.0
tiny-imagenet
1126164.0 11261632
<All keys matched successfully>
Parameter Sparsity: 1136987/11272456 (0.1009)
FLOP Sparsity: 892466041/2227441864 (0.4007)
Evaluation: Average loss: 2.4296, Top 1 Accuracy: 4970/10000 (49.70%)
accuracy_n for n = 2:  14489/20000 (72.44%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (7.644),(1.703),(5.941),(1.27917)
accuracy_n for n = 3:  16662/20000 (83.31%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (7.644),(1.021),(6.623),(0.77360)
accuracy_n for n = 4:  17845/20000 (89.22%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (7.644),(0.648),(6.996),(0.51123)
accuracy_n for n = 5:  18528/20000 (92.64%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (7.644),(0.431),(7.212),(0.35340)
accuracy_n for n = 7:  19168/20000 (95.84%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (7.644),(0.223),(7.421),(0.21159)
accuracy_n for n = 10:  19539/20000 (97.69%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (7.644),(0.114),(7.529),(0.12226)
accuracy_n for n = 15:  19761/20000 (98.81%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (7.644),(0.053),(7.591),(0.06351)
accuracy_n for n = 40:  19952/20000 (99.76%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (7.644),(0.009),(7.635),(0.01177)
accuracy_n for n = 60:  20000/20000 (100.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (7.644),(0.000),(7.644),(0.00036)
WORKING ON ARGS19
Creating tinyimagenet-resnet18 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 5.4921, Top 1 Accuracy: 50/10000 (0.50%)
accuracy_n for n = 4:  12/2400 (0.50%)
Pruning with synflow for 100 epochs.
compression: 1.5
tiny-imagenet
356125.0 11261632
<All keys matched successfully>
Parameter Sparsity: 366949/11272456 (0.0326)
FLOP Sparsity: 424788431/2227441864 (0.1907)
Evaluation: Average loss: 2.2782, Top 1 Accuracy: 4589/10000 (45.89%)
accuracy_n for n = 2:  13763/20000 (68.81%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (7.644),(1.904),(5.740),(1.20933)
accuracy_n for n = 3:  15957/20000 (79.78%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (7.644),(1.218),(6.426),(0.76808)
accuracy_n for n = 4:  17355/20000 (86.78%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (7.644),(0.764),(6.880),(0.48997)
accuracy_n for n = 5:  18073/20000 (90.36%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (7.644),(0.536),(7.108),(0.35191)
accuracy_n for n = 7:  18823/20000 (94.11%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (7.644),(0.301),(7.342),(0.22347)
accuracy_n for n = 10:  19294/20000 (96.47%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (7.644),(0.158),(7.486),(0.14994)
accuracy_n for n = 15:  19555/20000 (97.78%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (7.644),(0.090),(7.553),(0.11651)
accuracy_n for n = 40:  19784/20000 (98.92%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (7.644),(0.027),(7.617),(0.06907)
accuracy_n for n = 60:  19700/20000 (98.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (7.644),(0.030),(7.614),(0.06522)
WORKING ON ARGS20
Creating tinyimagenet-resnet18 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 5.4566, Top 1 Accuracy: 45/10000 (0.45%)
accuracy_n for n = 4:  11/2400 (0.46%)
Pruning with synflow for 100 epochs.
compression: 2.0
tiny-imagenet
112617.0 11261632
<All keys matched successfully>
Parameter Sparsity: 123441/11272456 (0.0110)
FLOP Sparsity: 150988501/2227441864 (0.0678)
Evaluation: Average loss: 2.7106, Top 1 Accuracy: 3611/10000 (36.11%)
accuracy_n for n = 2:  11212/20000 (56.06%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (7.644),(2.623),(5.021),(1.70864)
accuracy_n for n = 3:  13634/20000 (68.17%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (7.644),(1.839),(5.805),(1.19042)
accuracy_n for n = 4:  15092/20000 (75.46%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (7.644),(1.380),(6.263),(0.89367)
accuracy_n for n = 5:  16123/20000 (80.61%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (7.644),(1.051),(6.593),(0.72772)
accuracy_n for n = 7:  17279/20000 (86.39%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (7.644),(0.692),(6.952),(0.53048)
accuracy_n for n = 10:  18069/20000 (90.34%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (7.644),(0.439),(7.205),(0.41918)
accuracy_n for n = 15:  18690/20000 (93.45%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (7.644),(0.259),(7.384),(0.36718)
accuracy_n for n = 40:  19329/20000 (96.64%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (7.644),(0.078),(7.566),(0.50476)
accuracy_n for n = 60:  19300/20000 (96.50%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (7.644),(0.070),(7.574),(0.59518)
WORKING ON ARGS21
Creating tinyimagenet-resnet18 model.
Pre-Train for 0 epochs.
Evaluation: Average loss: 5.4958, Top 1 Accuracy: 54/10000 (0.54%)
accuracy_n for n = 4:  12/2400 (0.50%)
Pruning with synflow for 100 epochs.
compression: 2.5
tiny-imagenet
35613.0 11261632
<All keys matched successfully>
Parameter Sparsity: 46437/11272456 (0.0041)
FLOP Sparsity: 53856698/2227441864 (0.0242)
Evaluation: Average loss: 3.2267, Top 1 Accuracy: 2532/10000 (25.32%)
accuracy_n for n = 2:  7991/20000 (39.95%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 2:  (7.644),(3.404),(4.240),(2.33669)
accuracy_n for n = 3:  10197/20000 (50.98%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 3:  (7.644),(2.724),(4.920),(1.82860)
accuracy_n for n = 4:  11474/20000 (57.37%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 4:  (7.644),(2.275),(5.369),(1.55077)
accuracy_n for n = 5:  12536/20000 (62.68%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 5:  (7.644),(1.950),(5.694),(1.37346)
accuracy_n for n = 7:  13944/20000 (69.72%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 7:  (7.644),(1.495),(6.148),(1.16202)
accuracy_n for n = 10:  15141/20000 (75.70%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 10:  (7.644),(1.102),(6.542),(1.01292)
accuracy_n for n = 15:  16204/20000 (81.02%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 15:  (7.644),(0.765),(6.878),(0.92790)
accuracy_n for n = 40:  17460/20000 (87.30%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 40:  (7.644),(0.344),(7.300),(1.25786)
accuracy_n for n = 60:  17600/20000 (88.00%)
Entropy, E[H(Y|Yprime)], information gain, cross-loss for n = 60:  (7.644),(0.274),(7.370),(1.47932)
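One caveat when reading these logs: the "Parameter Sparsity" and "FLOP Sparsity" lines report remaining/total, i.e. the fraction kept (density), not the fraction removed. A quick check against the compression-2.5 tiny-imagenet run above (`density` is a hypothetical helper for illustration):

```python
def density(remaining: int, total: int) -> float:
    # Fraction of parameters (or FLOPs) kept after pruning;
    # the log labels this quantity "Sparsity".
    return remaining / total

# Numbers from the final tiny-imagenet run above.
print(f"{density(46437, 11272456):.4f}")       # 0.0041, as logged
print(f"{density(53856698, 2227441864):.4f}")  # 0.0242, as logged
```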